Co-exposure of C60 fullerene with benzo[a]pyrene results in enhanced biological effects in cells as determined by Fourier-transform infrared spectroscopy
C60 fullerene (C60) is a promising manufactured carbon-based nanoparticle (NP). With an increasing number of applications, it is being found in the environment. In addition, C60 is likely to associate with other environmental toxic contaminants. How such interactions with C60 impact the environmental fate, transport and bioavailability of toxicants remains unknown. Benzo[a]pyrene (B[a]P) is a polycyclic aromatic hydrocarbon (PAH). Herein, two cell lines (fish gill and MCF-7 cells) were employed to explore the biological impacts of co-exposure to C60 and B[a]P. Post-exposure, cells were interrogated using Fourier-transform infrared (FTIR) microspectroscopy. By inputting spectral data into principal component analysis and linear discriminant analysis, data reduction allowed for visualisation of cell categorisation and identification of wavenumber-related biomarkers corresponding to cellular alterations. Our results indicate that low-dose C60 enhances B[a]P-induced alterations, while C60 at high concentrations reduces these effects. We also found that although C60 co-exposure increases B[a]P-induced CYP1A1 induction, co-exposure seemingly attenuates the levels of oxidative damage induced by either agent singly. This suggests that interactions between environmental NPs and contaminants are complex and unpredictable.
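The PCA-then-LDA workflow the abstract describes can be sketched as follows; the spectra, class sizes, and the single altered band are synthetic stand-ins for illustration, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for baseline-corrected FTIR spectra:
# 20 "control" and 20 "co-exposed" cells, 100 wavenumber bins.
control = rng.normal(0.0, 1.0, size=(20, 100))
exposed = rng.normal(0.0, 1.0, size=(20, 100))
exposed[:, 40] += 3.0                 # a shifted band simulating a cellular alteration

X = np.vstack([control, exposed])
y = np.array([0] * 20 + [1] * 20)

# Step 1: PCA for data reduction (keep the top 10 components).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:10].T

# Step 2: Fisher LDA on the PCA scores (two classes -> one discriminant axis).
m0, m1 = scores[y == 0].mean(0), scores[y == 1].mean(0)
Sw = sum(np.cov(scores[y == c].T) for c in (0, 1))   # within-class scatter
w = np.linalg.solve(Sw, m1 - m0)                     # discriminant direction
proj = scores @ w                                    # 1-D cell categorisation

# Mapping the discriminant back to wavenumber space highlights the
# bands ("biomarkers") that drive the separation between cell classes.
loadings = Vt[:10].T @ w
print(int(np.abs(loadings).argmax()))  # most influential wavenumber bin
```

The back-projected loading vector is what makes the biomarker interpretation possible: peaks in it point at the wavenumbers where the exposed and control spectra differ most.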
MVP: Multi-task Supervised Pre-training for Natural Language Generation
Pre-trained language models (PLMs) have achieved remarkable success in
natural language generation (NLG) tasks. To date, most NLG-oriented PLMs have
been pre-trained in an unsupervised manner on large-scale general corpora.
Meanwhile, an increasing number of models pre-trained with labeled data
(i.e. "supervised pre-training") showcase superior performance compared to
unsupervised pre-trained models. Motivated by the success of supervised
pre-training, we propose Multi-task superVised Pre-training (MVP) for natural
language generation. We collect a large-scale natural language generation
corpus, MVPCorpus, from datasets covering diverse NLG tasks. Then we
unify these examples into a general text-to-text format to pre-train the text
generation model MVP in a supervised manner. For each task, we further
pre-train task-specific soft prompts to stimulate the model's capacity to
perform that task. Our MVP model can be seen as an early practice of applying
instruction tuning to relatively small PLMs. Extensive experiments have
demonstrated the effectiveness and generality of our MVP model across a number
of NLG tasks, achieving state-of-the-art performance on the majority of the
evaluated datasets and outperforming strong baselines such as BART and Flan-T5.
Comment: Accepted by ACL 202
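The unification step — casting heterogeneous NLG examples into one text-to-text format — can be sketched as below; the task names and field names are illustrative assumptions, not MVPCorpus's actual schema:

```python
# Hypothetical converters: each task's fields are flattened into a single
# input string (with a natural-language instruction) and a target string,
# so one text-to-text model can be pre-trained on all tasks at once.
def to_text_to_text(task, example):
    if task == "summarization":
        return "Summarize: " + example["document"], example["summary"]
    if task == "question-answering":
        src = "Answer the question: {} Context: {}".format(
            example["question"], example["context"])
        return src, example["answer"]
    if task == "data-to-text":
        triples = " | ".join(", ".join(t) for t in example["triples"])
        return "Describe the table: " + triples, example["text"]
    raise ValueError(f"unsupported task: {task}")

src, tgt = to_text_to_text(
    "question-answering",
    {"question": "Who wrote Hamlet?",
     "context": "Hamlet is a tragedy by William Shakespeare.",
     "answer": "William Shakespeare"})
print(src)   # one flat string a text-to-text model can consume
```

Once every task is reduced to (input string, output string) pairs, supervised pre-training is an ordinary sequence-to-sequence training loop over the mixed corpus.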
Constrained Reinforcement Learning for Dynamic Material Handling
As one of the core parts of flexible manufacturing systems, material handling
involves storage and transportation of materials between workstations with
automated vehicles. Improvements in material handling can boost the
overall efficiency of the manufacturing system. However, the occurrence of
dynamic events during the optimisation of task arrangements poses a challenge
that requires adaptability and effectiveness. In this paper, we address the
scheduling of automated guided vehicles for dynamic material handling.
Motivated by some real-world scenarios, unknown new tasks and unexpected
vehicle breakdowns are regarded as dynamic events in our problem. We formulate
the problem as a constrained Markov decision process which takes into account
tardiness and available vehicles as cumulative and instantaneous constraints,
respectively. An adaptive constrained reinforcement learning algorithm that
combines Lagrangian relaxation and invalid action masking, named RCPOM, is
proposed to address the problem with two hybrid constraints. Moreover, a
gym-like dynamic material handling simulator, named DMH-GYM, is developed and
equipped with diverse problem instances, which can be used as benchmarks for
dynamic material handling. Experimental results on the problem instances
demonstrate the outstanding performance of our proposed approach compared with
eight state-of-the-art constrained and non-constrained reinforcement learning
algorithms, and widely used dispatching rules for material handling.
Comment: Accepted by the 2023 International Joint Conference on Neural
Networks (IJCNN).
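The two mechanisms named above — Lagrangian relaxation for the cumulative tardiness constraint and invalid action masking for the instantaneous vehicle-availability constraint — can be sketched as follows; the numbers are illustrative and this is not the RCPOM implementation:

```python
import numpy as np

def masked_greedy(q_values, valid):
    """Invalid action masking: choose the best action among the
    currently available vehicles only."""
    q = np.where(valid, q_values, -np.inf)
    return int(np.argmax(q))

def lagrangian_reward(reward, tardiness, lam):
    """Lagrangian relaxation: fold the cumulative tardiness
    constraint into the scalar reward the agent maximizes."""
    return reward - lam * tardiness

# Hypothetical state: 4 candidate vehicles, vehicle 1 has broken down.
q = np.array([0.2, 0.9, 0.5, 0.1])
valid = np.array([True, False, True, True])
action = masked_greedy(q, valid)        # vehicle 1 is excluded outright

# Dual update: raise the multiplier when the episode's tardiness exceeds
# its budget, and project back to lam >= 0.
lam, lr, budget = 0.1, 0.05, 10.0
episode_tardiness = 14.0
lam = max(0.0, lam + lr * (episode_tardiness - budget))
```

The design point is that the two constraint types are handled differently: the instantaneous constraint is enforced exactly (a broken-down vehicle can never be chosen), while the cumulative one is only pressured via the multiplier.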
BAMBOO: A Comprehensive Benchmark for Evaluating Long Text Modeling Capacities of Large Language Models
Large language models (LLMs) have achieved impressive proficiency on NLP
tasks of normal length. Recently, multiple studies have focused on
extending the context length and enhancing the long text modeling capabilities
of LLMs. To comprehensively evaluate the long context ability of LLMs, we
propose BAMBOO, a multi-task long context benchmark. BAMBOO has been designed
with four principles: comprehensive capacity evaluation, avoidance of data
contamination, accurate automatic evaluation, and different length levels. It
consists of 10 datasets from 5 different long text understanding tasks, i.e.,
question answering, hallucination detection, text sorting, language modeling,
and code completion, to cover core capacities and various domains of LLMs. We
conduct experiments with five long context models on BAMBOO and further discuss
four key research questions about long text modeling. We also qualitatively analyze current
long context models and point out future directions for enhancing long text
modeling capacities. We release our data, prompts, and code at
https://github.com/RUCAIBox/BAMBOO
HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models
Large language models (LLMs), such as ChatGPT, are prone to generating
hallucinations, i.e., content that conflicts with the source or cannot be
verified against factual knowledge. To understand what types of content, and
to what extent, LLMs are apt to hallucinate, we introduce the Hallucination
Evaluation benchmark for Large Language Models (HaluEval), a large collection
of generated and human-annotated hallucinated samples for evaluating the
performance of LLMs in recognizing hallucination. To generate these samples, we
propose a ChatGPT-based two-step framework, i.e., sampling-then-filtering.
We also hire human labelers to annotate the hallucinations in
ChatGPT responses. The empirical results suggest that ChatGPT is likely to
generate hallucinated content on specific topics by fabricating unverifiable
information. Moreover, existing LLMs face great challenges in recognizing
the hallucinations in texts. However, our experiments also show that
providing external knowledge or adding reasoning
steps can help LLMs recognize hallucinations. Our benchmark can be accessed at
https://github.com/RUCAIBox/HaluEval.
Comment: Accepted to EMNLP 2023 Main Conference (Long Paper).
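The sampling-then-filtering idea can be sketched generically as below; the canned sampler and the keyword-overlap filter are trivial stand-ins for the ChatGPT-based components in the paper:

```python
def sample_then_filter(question, sampler, scorer, k=4):
    """Two-step generation (sketch): sample k candidate hallucinated
    answers, then filter by keeping the most plausible-looking one."""
    candidates = [sampler(question) for _ in range(k)]
    return max(candidates, key=scorer)

# Stand-in sampler: yields pre-written hallucinated candidates in order,
# mimicking repeated sampling from a generator.
_canned = iter([
    "Paris is the capital of Italy.",
    "The capital of France is Lyon.",
    "France's capital city is Marseille.",
    "The capital of France is Nice.",
])
def toy_sampler(question):
    return next(_canned)

# Stand-in filter: prefers candidates that overlap with the question's
# topic words, mimicking a plausibility check.
def toy_scorer(candidate):
    return candidate.count("France") + candidate.count("capital")

best = sample_then_filter("What is the capital of France?",
                          toy_sampler, toy_scorer)
print(best)   # keeps "The capital of France is Lyon."
```

The filtering step is what makes the benchmark hard: the surviving hallucinations are, by construction, the ones that look most plausible.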
Learning to Imagine: Visually-Augmented Natural Language Generation
People often imagine relevant scenes to aid in the writing process. In this
work, we aim to utilize visual information for composition in the same manner
as humans. We propose a method, LIVE, that makes pre-trained language models
(PLMs) Learn to Imagine for Visually-augmented natural language gEneration.
First, we imagine the scene based on the text: we use a diffusion model to
synthesize high-quality images conditioned on the input texts. Second, we use
CLIP to determine, in a post-hoc way, whether the text can evoke the
imagination. Finally, our imagination is dynamic: we conduct synthesis for each
sentence rather than generate only one image for an entire paragraph.
Technically, we propose a novel plug-and-play fusion layer to obtain
visually-augmented representations for each text. Our vision-text fusion layer
is compatible with Transformer-based architectures. We have conducted extensive
experiments on four generation tasks using BART and T5, and the automatic
results and human evaluation demonstrate the effectiveness of our proposed
method. We will release the code, model, and data at the link:
https://github.com/RUCAIBox/LIVE.
Comment: Accepted by ACL 202
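A minimal cross-attention fusion layer in the plug-and-play spirit described above might look like this; the dimensions and the residual design are assumptions for illustration, not the paper's exact layer:

```python
import numpy as np

def fuse(text_h, image_h, Wq, Wk, Wv):
    """Text tokens cross-attend to image-patch features; the attended
    visual summary is added residually, so the PLM's own hidden states
    are preserved (the "plug-and-play" property)."""
    Q, K, V = text_h @ Wq, image_h @ Wk, image_h @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)   # numerically stable softmax
    att = np.exp(scores)
    att /= att.sum(axis=-1, keepdims=True)
    return text_h + att @ V                        # residual connection

rng = np.random.default_rng(0)
d, T, P = 8, 5, 4                  # hidden size, text tokens, image patches
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
text_h = rng.normal(size=(T, d))   # stand-in for PLM token representations
image_h = rng.normal(size=(P, d))  # stand-in for synthesized-image features
out = fuse(text_h, image_h, Wq, Wk, Wv)
```

The residual form also makes the layer degrade gracefully: if the visual features are all zero, the attended summary is zero and the layer reduces to the identity, leaving the underlying PLM untouched.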
Expression patterns and immunological characterization of PANoptosis-related genes in gastric cancer
Background: Accumulating studies have demonstrated the close relationship between tumor immunity and pyroptosis, apoptosis, and necroptosis. However, the role of PANoptosis in gastric cancer (GC) is yet to be fully understood.
Methods: This research attempted to identify the expression patterns of PANoptosis regulators and the immune landscape in GC by integrating the GSE54129 and GSE65801 datasets. We analyzed GC specimens and established molecular clusters associated with PANoptosis-related genes (PRGs) and corresponding immune characteristics. The differentially expressed genes were determined with the WGCNA method. Afterward, we employed four machine learning algorithms (Random Forest, Support Vector Machine, Generalized Linear Model, and eXtreme Gradient Boosting) to select the optimal model, which was validated using a nomogram, calibration curve, decision curve analysis (DCA), and two validation cohorts. Additionally, this study discussed the relationship between infiltrating immune cells and the variables in the selected model.
Results: This study identified dysregulated PRGs and differential immune activities between GC and normal samples, and further identified two PANoptosis-related molecular clusters in GC. These clusters demonstrated remarkable immunological heterogeneity, with Cluster1 exhibiting abundant immune infiltration. The Support Vector Machine signature was found to have the best discriminative ability, and a 5-gene-based SVM signature was established. This model showed excellent performance in the external validation cohorts, and the nomogram, calibration curve, and DCA indicated its reliability in predicting GC patterns. Further analysis confirmed that the 5 selected variables were remarkably related to infiltrating immune cells and immune-related pathways.
Conclusion: Taken together, this work demonstrates that the PANoptosis pattern has the potential to serve as a stratification tool for patient risk assessment and a reflection of the immune microenvironment in GC.
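The model-selection step — comparing candidate algorithms by cross-validated performance and keeping the best — can be sketched generically; the nearest-centroid model and synthetic data below are toy stand-ins for the four real candidates (RF, SVM, GLM, XGBoost) and the gene-expression cohorts:

```python
import numpy as np

def kfold_scores(model_fn, X, y, k=5):
    """Mean k-fold accuracy for one candidate algorithm (sketch)."""
    idx = np.arange(len(y))
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        predict = model_fn(X[train_idx], y[train_idx])
        accs.append(float(np.mean(predict(X[test_idx]) == y[test_idx])))
    return float(np.mean(accs))

def nearest_centroid(X, y):
    """Toy classifier standing in for the real candidates: assign each
    sample to the class with the closest training-set centroid."""
    cents = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
    def predict(Z):
        labels = list(cents)
        dists = np.stack([np.linalg.norm(Z - cents[c], axis=1) for c in labels])
        return np.array(labels)[dists.argmin(axis=0)]
    return predict

# Synthetic two-class data standing in for 5-gene expression profiles.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (20, 5)), rng.normal(3.0, 0.1, (20, 5))])
y = np.array([0] * 20 + [1] * 20)
score = kfold_scores(nearest_centroid, X, y)
```

In the study's workflow, each of the four algorithms would get its own `model_fn`, and the signature with the highest cross-validated score is the one carried forward to the nomogram, DCA, and external validation.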